4 - PDE based Image Processing [ID:42078]

Thanks a lot.

Well, I guess I want you to look behind.

I think we start.

So just a little recap: we discussed what happens if we apply the heat equation to smooth an image $f$.

We defined the Fourier transform as the integral $\int_{\mathbb{R}^d} e^{-i p\cdot x}\, u(x)\,dx$.

And we have seen that the Fourier transform of $\Delta u$ is equal to $-|p|^2$ times the Fourier transform of $u$.
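Written out, with the sign convention that matches the computation below and with the normalization put entirely into the inverse transform, these two facts from the recap read:

\[
\hat u(p) = (\mathcal{F}u)(p) = \int_{\mathbb{R}^d} e^{-i p\cdot x}\, u(x)\,dx,
\qquad
\mathcal{F}(\Delta u)(p) = -|p|^2\,\hat u(p).
\]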

So we solve an ODE.

After the Fourier transform, only the $x$-variable has been transformed, so the time derivative stays the same.

Then we have $-|p|^2\,\hat u$ on the right-hand side, which led us to $\hat u$ given by $\hat f$, the Fourier transform of the initial value, times $e^{-|p|^2 t}$.
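In formulas, the heat equation $\partial_t u = \Delta u$ turns into an ordinary differential equation for each fixed frequency $p$:

\[
\partial_t \hat u(p,t) = -|p|^2\, \hat u(p,t),
\qquad
\hat u(p,0) = \hat f(p)
\quad\Longrightarrow\quad
\hat u(p,t) = \hat f(p)\, e^{-|p|^2 t}.
\]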

We can also write this as $\mathcal{F}u$ for the Fourier transform of $u$, and then we introduced the inverse Fourier transform,

with some normalization constant times the integral $\int_{\mathbb{R}^d} e^{i p\cdot x}\, v(p)\,dp$.

And if we choose the constant in the right way, we really get an inverse, in the sense that $u = \mathcal{F}^{-1}(\mathcal{F}u) = \mathcal{F}^{-1}\hat u$.
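With a normalization constant $c$ (for this convention, $c = (2\pi)^{-d}$), the inverse transform and the inversion property read:

\[
(\mathcal{F}^{-1} v)(x) = c \int_{\mathbb{R}^d} e^{\,i p\cdot x}\, v(p)\,dp,
\qquad
\mathcal{F}^{-1}(\mathcal{F}u) = u .
\]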

Now we go on from here and apply the inverse Fourier transform.

So that's the last step we still had to do from last time, to really get a formula for $u$ itself.

So we get the solution of the heat equation: $u(x,t)$ is given by the inverse Fourier transform of $\hat f(p)\, e^{-|p|^2 t}$.
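As an aside, before writing this out: the Fourier-multiplier form is exactly what a discrete implementation does, namely transform the image, damp each frequency by $e^{-|p|^2 t}$, and transform back. Below is a minimal sketch in Python/NumPy, assuming a real grayscale image stored as a 2-D array on a unit pixel grid with periodic boundary conditions; the function name heat_smooth and the test image are illustrative, not from the lecture.

```python
import numpy as np

def heat_smooth(f, t):
    """Smooth a 2-D image f by the heat equation up to time t,
    implemented as the Fourier multiplier exp(-|p|^2 * t)."""
    ny, nx = f.shape
    # Angular frequencies p along each axis (periodic boundary conditions).
    py = 2.0 * np.pi * np.fft.fftfreq(ny)
    px = 2.0 * np.pi * np.fft.fftfreq(nx)
    p2 = py[:, None] ** 2 + px[None, :] ** 2   # |p|^2 on the frequency grid
    f_hat = np.fft.fft2(f)                     # Fourier transform of the initial value
    u_hat = f_hat * np.exp(-p2 * t)            # hat(u)(p,t) = hat(f)(p) * exp(-|p|^2 t)
    return np.fft.ifft2(u_hat).real            # back to image space (imaginary part ~ 0)

# Tiny usage example: a noisy step edge, smoothed for two different times.
rng = np.random.default_rng(0)
f = np.zeros((128, 128))
f[:, 64:] = 1.0
f += 0.1 * rng.standard_normal(f.shape)
u_mild = heat_smooth(f, t=1.0)     # light smoothing
u_strong = heat_smooth(f, t=10.0)  # heavy smoothing, edges blur out
```

Because the multiplier only damps frequencies, a larger $t$ gives stronger smoothing, which is exactly the behaviour of the heat equation discussed here.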

Okay, and if you write that out in detail, it becomes an integral over $\mathbb{R}^d$ of $e^{i p\cdot x}$, times this factor $e^{-|p|^2 t}$,

and then the Fourier transform of $f$, which we now write explicitly as an integral over $\mathbb{R}^d$ of $e^{-i p\cdot y}\, f(y)\,dy$; all of this is integrated with respect to $p$.
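Putting the pieces together (the constant $c$ comes from the inverse transform), this is:

\[
u(x,t) = c \int_{\mathbb{R}^d} e^{\,i p\cdot x}\, e^{-|p|^2 t}
\int_{\mathbb{R}^d} e^{-i p\cdot y}\, f(y)\,dy \; dp .
\]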

Okay, so now we change the order of the integrals.

So we have first the integral over $f$, and inside, collecting the terms a bit differently, it's $e^{i p\cdot(x-y)}$, combining one exponential from here and one from there, and then $e^{-|p|^2 t}$, integrated with respect to $p$; afterwards this is integrated against $f$.

Okay, and we said the inner integral is a function of $x - y$ and $t$, and here we have the initial value, so we directly get the following form:

what we call a convolution. So we have a convolution kernel $G$, which changes with time, and we convolve the initial value with this $G$; and $G$ is given as follows.
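After exchanging the order of integration this becomes

\[
u(x,t) = \int_{\mathbb{R}^d} f(y)\,
\Big( c \int_{\mathbb{R}^d} e^{\,i p\cdot(x-y) - |p|^2 t}\, dp \Big)\, dy
= \big(G(\cdot,t) * f\big)(x).
\]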

Okay, now I only write one variable $x$ instead of $x - y$: $G$ is the integral over $\mathbb{R}^d$ of $e^{\,i p\cdot x - |p|^2 t}$, integrated with respect to $p$,

together with a normalization constant.
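That is, with $c$ again standing for the normalization constant of the inverse transform:

\[
G(x,t) = c \int_{\mathbb{R}^d} e^{\,i p\cdot x \,-\, |p|^2 t}\, dp .
\]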

So our last step is to compute this integral, and we see here $-|p|^2 t$: this is actually also $(i\sqrt{t}\,p)^2$.

Okay, by the way, be careful: when I write $p^2$ now, in multiple dimensions I of course always mean $|p|^2$, the squared norm; a bit sloppy today, but we wrote it correctly last time.

Okay yes.

Okay.

And now we make a little change of variables: we expand here to get a complete square. So what you see is:

we briefly have $(i\sqrt{t}\,p)^2 + i p\cdot x$; okay, I write the second term as $2\,(i\sqrt{t}\,p)\cdot \frac{x}{2\sqrt{t}}$.

This is the same as $\big(i\sqrt{t}\,p + \frac{x}{2\sqrt{t}}\big)^2$,

taken correctly with the norm; but then I have added a squared term in $x$ which I need to subtract, so I need to subtract $\frac{|x|^2}{4t}$.
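Completing the square in the exponent therefore gives

\[
i p\cdot x - |p|^2 t
= \big(i\sqrt{t}\,p\big)^2 + 2\,\big(i\sqrt{t}\,p\big)\cdot\frac{x}{2\sqrt{t}}
= \Big(i\sqrt{t}\,p + \frac{x}{2\sqrt{t}}\Big)^2 - \frac{|x|^2}{4t}.
\]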

Okay, why is that useful?

Well, the nice thing is: I now call this $q$, as a new variable,

and then, by the transformation theorem, I can work out $dq$.

Okay, the shift is independent of $p$; this is a diagonal (affine) transformation of $p$.

So the Jacobian matrix is diagonal, with $i\sqrt{t}$ in every diagonal entry.

If I take the determinant, I get $(i\sqrt{t})^d$.

And if I take the absolute value of the Jacobian determinant, I just get $(\sqrt{t})^d$.

That tells us how $dp$ transforms.
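Explicitly, the substitution and its Jacobian are

\[
q = i\sqrt{t}\,p + \frac{x}{2\sqrt{t}},
\qquad
\frac{\partial q}{\partial p} = i\sqrt{t}\, I_d,
\qquad
dq = (i\sqrt{t})^{d}\, dp,
\qquad
\Big|\det \frac{\partial q}{\partial p}\Big| = t^{d/2}.
\]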

So we put these in.

We just write it briefly: we have here $e^{q^2}$, with a plus sign actually.

Be careful: we are not actually integrating over $\mathbb{R}^d$, because $q$ is $i$ times something plus a real shift, so the integration runs over a shifted copy of $i\,\mathbb{R}^d$.
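Carrying the computation to the end (shifting the contour of the $q$-integral back to the real axis, which is justified because the integrand is entire and decays, and evaluating the resulting Gaussian integral with the normalization $c = (2\pi)^{-d}$) gives the well-known Gaussian heat kernel:

\[
G(x,t) = \frac{1}{(4\pi t)^{d/2}}\, e^{-\frac{|x|^2}{4t}},
\qquad
u(x,t) = \big(G(\cdot,t) * f\big)(x).
\]

So smoothing an image with the heat equation up to time $t$ is convolution with a Gaussian of standard deviation $\sqrt{2t}$.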

Part of a video series.
Access: open access.
Duration: 01:39:18.
Recording date: 2022-05-17.
Uploaded: 2022-05-17 16:39:05.
Language: de-DE.
